Recent advances in machine learning have shown that pre-trained representations obtained via self-supervised learning can achieve high accuracy with small amounts of training data. Unlike the vision and natural language processing domains, pre-training for IMU-based applications is challenging, because only a few publicly available datasets have sufficient size and diversity for learning generalizable representations. To overcome this problem, we propose IMG2IMU, a novel approach that adapts pre-trained representations from large-scale images to diverse IMU sensing tasks. We convert sensor data into visually interpretable spectrograms so that the model can exploit knowledge obtained from the vision domain. We further apply contrastive learning to learn representations suited to interpreting sensor data. Our extensive evaluation on five IMU sensing tasks shows that IMG2IMU consistently outperforms the baselines, illustrating that vision knowledge can be incorporated into few-shot learning settings for IMU sensing tasks.
translated by Google Translate
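The spectrogram conversion the abstract describes can be sketched as follows. This is a generic windowed-DFT magnitude spectrogram in plain Python; the window and hop sizes, and the test tone, are illustrative choices, not IMG2IMU's actual preprocessing pipeline.

```python
import math

def spectrogram(signal, win=32, hop=16):
    """Magnitude spectrogram via a windowed DFT: each frame of `win` samples
    becomes one column of a time-frequency 'image'."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):  # keep non-negative frequency bins only
            re = sum(frame[n] * math.cos(2 * math.pi * k * n / win) for n in range(win))
            im = -sum(frame[n] * math.sin(2 * math.pi * k * n / win) for n in range(win))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames  # list of frames, each a list of frequency magnitudes

# A 5-cycles-per-window sine: its energy should concentrate in bin 5 of each frame.
sig = [math.sin(2 * math.pi * 5 * t / 32) for t in range(128)]
spec = spectrogram(sig)
```

A real IMU pipeline would apply this per sensor axis and stack the resulting channels into one image for the vision backbone.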
Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's and the Navier-Stokes equations, as well as the heat and wave equations. Consequently, harmonic functions have many applications, from industrial process optimisation to robotic path planning and the calculation of first exit times of random walks. Despite their ubiquity and relevance, there have been few attempts to develop effective means of representing harmonic functions in the context of machine learning architectures, either in machine learning on classical computers, or in the nascent field of quantum machine learning. Architectures which impose or encourage an inductive bias towards harmonic functions would facilitate data-driven modelling and the solution of inverse problems in a range of applications. For classical neural networks, it has already been established how leveraging inductive biases can in general lead to improved performance of learning algorithms. The introduction of such inductive biases within a quantum machine learning setting is still in its early stages. In this work, we derive exactly-harmonic (conventional- and quantum-) neural networks in two dimensions for simply-connected domains by leveraging the characteristics of holomorphic complex functions. We then demonstrate how these can be approximately extended to multiply-connected two-dimensional domains using techniques inspired by domain decomposition in physics-informed neural networks. We further provide architectures and training protocols to effectively impose approximately harmonic constraints in three dimensions and higher, and as a corollary we report divergence-free network architectures in arbitrary dimensions. Our approaches are demonstrated with applications to heat transfer, electrostatics and robot navigation, with comparisons to physics-informed neural networks included.
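The two-dimensional construction rests on a classical fact: the real part of any holomorphic function is harmonic. The sketch below uses the cubic z ↦ z³ as a stand-in for a trained network output and verifies numerically that its finite-difference Laplacian vanishes; it illustrates the mathematical property the architectures exploit, not the networks themselves.

```python
import math

def u(x, y):
    # Real part of the holomorphic map z -> z**3, i.e. x**3 - 3*x*y**2.
    # Real parts of holomorphic functions are exactly harmonic in 2D.
    z = complex(x, y)
    return (z ** 3).real

def laplacian(f, x, y, h=1e-3):
    # Five-point finite-difference estimate of u_xx + u_yy.
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

# Largest Laplacian residual over a grid in [-0.5, 0.5]^2: should be ~0.
residual = max(abs(laplacian(u, x / 10, y / 10))
               for x in range(-5, 6) for y in range(-5, 6))
```

An exactly-harmonic network in this spirit would parameterise a holomorphic map (e.g. with complex weights) and output its real part, inheriting harmonicity by construction.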
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora, using innovative self-supervised objectives, and on diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
We tackle open-world semantic segmentation, which aims at learning to segment arbitrary visual concepts in images, by using only image-text pairs without dense annotations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and adapting the learned image-level understanding to the segmentation task. However, these CL-based methods suffer from a train-test discrepancy: training considers only image-text-level alignment, while the segmentation task requires region-text-level alignment at test time. In this paper, we propose a novel Text-grounded Contrastive Learning (TCL) framework to directly align a text and a region described by the text, addressing this train-test discrepancy. Our method generates a segmentation mask associated with a given text, extracts a grounded image embedding from the masked region, and aligns it with the text embedding via TCL. The framework addresses the discrepancy by letting the model learn region-text-level alignment instead of image-text-level alignment, and encourages the model to directly improve the quality of the generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation protocol with 8 widely used semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation performance by large margins on all datasets. Code is available at https://github.com/kakaobrain/tcl.
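The region-text alignment step can be illustrated with an InfoNCE-style contrastive loss over matched (region, text) embedding pairs. This is a generic stand-in for TCL's objective (region-to-text direction only); the temperature and toy embeddings are arbitrary choices, and the authors' exact loss is not reproduced here.

```python
import math

def region_text_loss(region_embs, text_embs, temperature=0.1):
    """InfoNCE over matched (region, text) pairs, region->text direction only.
    Pair i is the positive for region i; all other texts are negatives."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    R = [norm(v) for v in region_embs]
    T = [norm(v) for v in text_embs]
    loss = 0.0
    for i, r in enumerate(R):
        logits = [sum(a * b for a, b in zip(r, t)) / temperature for t in T]
        m = max(logits)  # stabilised log-sum-exp
        log_z = m + math.log(sum(math.exp(s - m) for s in logits))
        loss += log_z - logits[i]  # cross-entropy at the matched text
    return loss / len(R)

# Well-aligned matched pairs give near-zero loss; a mismatched pairing does not.
aligned = [[1.0, 0.0], [0.0, 1.0]]
loss_good = region_text_loss(aligned, aligned)
loss_bad = region_text_loss(aligned, [[0.0, 1.0], [1.0, 0.0]])
```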
We present HOReeNet, which tackles the novel task of manipulating images involving hands, objects, and their interactions. In particular, we are interested in transferring objects from source images to target images and manipulating 3D hand postures to tightly grasp the transferred objects. Furthermore, the manipulation needs to be reflected in the 2D image space. In our reenactment scenario involving hand-object interactions, 3D reconstruction becomes essential, as 3D contact reasoning between hands and objects is required to achieve a tight grasp. At the same time, obtaining high-quality 2D images from 3D space requires well-designed 3D-to-2D projection and image refinement. HOReeNet is the first fully differentiable framework proposed for such a task. On hand-object interaction datasets, we compare HOReeNet to conventional image translation and reenactment algorithms, and demonstrate that our approach achieves state-of-the-art results on the proposed task.
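The geometric core of the 3D-to-2D step is ordinary perspective projection. Below is a textbook pinhole-camera sketch of that step only; HOReeNet's differentiable projection and refinement modules are more involved, and the focal length and points here are made up for illustration.

```python
def project(points, f=1.0):
    """Ideal pinhole projection of camera-frame 3D points (x, y, z), z > 0,
    onto the image plane at focal length f: (x, y, z) -> (f*x/z, f*y/z)."""
    return [(f * x / z, f * y / z) for x, y, z in points]

# Two points in front of the camera; the farther one lands closer to the centre.
pts2d = project([(0.2, 0.4, 2.0), (-0.5, 0.1, 1.0)])
```

In a differentiable pipeline, this mapping is composed with the mesh renderer so gradients from 2D image losses flow back to the 3D hand and object parameters.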
Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for language models has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performances for larger LMs; it sometimes even substantially improves the underlying LM with just a few iterations. We also find that sequential unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with a previous data preprocessing method and a decoding method known to mitigate privacy risks for LMs, we show that unlearning can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust. We release the code and dataset needed to replicate our results at https://github.com/joeljang/knowledge-unlearning.
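The core mechanism, gradient ascent on the data to be forgotten, can be shown on a toy model. Here logistic regression stands in for the language model and a single (x, y) pair for the target token sequence; the weights, learning rate, and step count are arbitrary illustrative values.

```python
import math

def nll_and_grad(w, x, y):
    """Negative log-likelihood of one (x, y) pair under logistic regression,
    plus its gradient (p - y) * x with respect to the weights."""
    p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    loss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return loss, [(p - y) * xi for xi in x]

w = [2.0, -1.0]           # stands in for a pretrained model's parameters
target = ([1.0, 0.5], 1)  # the "private" sample the model should forget

before, _ = nll_and_grad(w, *target)
for _ in range(5):
    _, g = nll_and_grad(w, *target)
    w = [wi + 0.5 * gi for wi, gi in zip(w, g)]  # ASCENT: step along +gradient
after, _ = nll_and_grad(w, *target)              # target loss has risen
```

For an actual LM the same idea applies token-wise: maximise (rather than minimise) the next-token loss on the target sequences for a few iterations, while monitoring general language modeling performance.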
Salient object detection (SOD) has recently attracted attention, but high-resolution (HR) images remain less studied. Unfortunately, HR images and their pixel-level annotations are far more labor- and time-intensive to obtain than low-resolution (LR) images and annotations. We therefore propose an image-pyramid-based SOD framework, the Inverse Saliency Pyramid Reconstruction Network (InSPyReNet), for HR prediction without any HR dataset. We design InSPyReNet to produce a strict image pyramid structure, which enables it to merge multiple results with pyramid-based image blending. For HR prediction, we design a pyramid blending method that synthesizes two different image pyramids from an LR and an HR scale of the same image, overcoming the effective receptive field (ERF) discrepancy. Our extensive evaluation on public LR and HR SOD benchmarks demonstrates that InSPyReNet surpasses state-of-the-art (SOTA) methods on various SOD metrics and in boundary accuracy.
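Pyramid blending builds on the standard Laplacian pyramid identity that a "strict image pyramid structure" enables: per-level detail residuals plus an upsampled base reconstruct the signal exactly. A minimal 1-D sketch with nearest-neighbour resampling is given below (the paper works on 2-D saliency maps, and its blending of LR and HR pyramids is not reproduced here):

```python
def down(x):
    return x[::2]  # decimate by 2

def up(x, n):
    return [x[i // 2] for i in range(n)]  # nearest-neighbour upsample to length n

def build_pyramid(x, levels=2):
    """Return per-level Laplacian residuals and the low-resolution base."""
    laps = []
    for _ in range(levels):
        low = down(x)
        laps.append([xi - ui for xi, ui in zip(x, up(low, len(x)))])
        x = low
    return laps, x

def reconstruct(laps, base):
    """Invert build_pyramid: upsample and add residuals, coarse to fine."""
    x = base
    for lap in reversed(laps):
        x = [ui + li for ui, li in zip(up(x, len(lap)), lap)]
    return x

sig = [float(i * i) for i in range(8)]
laps, base = build_pyramid(sig)
```

Blending happens by combining residuals from different pyramids level by level before reconstruction, which is what lets predictions from different scales be merged without seams.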
Despite success in document understanding, the practical task of long document understanding remains largely unexplored, owing to several computational challenges and the question of how to efficiently absorb long multimodal inputs. Most transformer-based approaches handle only short documents, and due to their excessive computation and memory costs they use only textual information for attention. To address these issues in long document understanding, we explore different approaches for handling 1D and, newly, 2D position-aware attention with an intrinsically shortened context. Experimental results show that our proposed models have advantages on this task across various evaluation metrics. Moreover, our models modify only the attention mechanism, so they are easily adapted to any transformer-based architecture.
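One simple reading of "2D position-aware attention" is a layout-dependent bias added to the attention logits before the softmax. The sketch below assumes a negative-Euclidean-distance bias between token box centres; the bias form, the scale `alpha`, and the toy boxes are all assumptions for illustration, not the paper's formulation.

```python
import math

def attention_with_2d_bias(scores, boxes, alpha=1.0):
    """Softmax attention where raw scores are biased toward tokens that are
    spatially close on the page (boxes = (x, y) centres of token boxes)."""
    n = len(scores)
    biased = [[scores[i][j] - alpha * math.dist(boxes[i], boxes[j])
               for j in range(n)] for i in range(n)]
    out = []
    for row in biased:
        m = max(row)  # stabilised softmax
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

# Three tokens: two on the same line, one far down the page.
boxes = [(0.0, 0.0), (1.0, 0.0), (0.0, 9.0)]
attn = attention_with_2d_bias([[0.0] * 3 for _ in range(3)], boxes)
```

With uniform raw scores, the layout bias alone makes each token attend more to its spatial neighbours, which is the intuition behind layout-aware attention in document models.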
Automated analysis of chest radiographs using deep learning has great potential to enhance the clinical diagnosis of disease. However, deep learning models typically require large amounts of annotated data to achieve high performance, which is often an obstacle to adaptation in the medical domain. In this paper, we build a data-efficient learning framework that leverages radiology reports to improve medical image classification performance with limited labeled data (fewer than 1000 examples). Specifically, we examine image captioning pretraining to learn high-quality medical image representations that can be trained with fewer examples. Following joint pretraining of a convolutional encoder and a transformer decoder, we transfer the learned encoder to various classification tasks. Averaged over 9 pathologies, we find that our model achieves higher classification performance than both ImageNet-supervised and in-domain supervised pretraining when labeled training data are limited.
Graph neural networks (GNNs) are widely adopted in advanced AI systems owing to their representation power on graph data. Even though explaining GNNs is crucial for increasing trust in such systems, it is also challenging due to the complexity of GNN execution. Recently, many works have been proposed to address some of the issues in GNN explanation; however, they either lack generalization ability or suffer from computational burden when the graphs are large. To address these challenges, we propose a multi-level GNN explanation framework based on the observation that a GNN constitutes a multimodal learning process over multiple components of graph data. The complexity of the original problem is relaxed by decomposing it into multiple sub-parts represented as a hierarchy. The top-level explanation aims to specify the contribution of each component to the model's execution and prediction, while the fine-grained levels focus on feature attribution and graph-structure attribution analysis based on knowledge distillation. Student models are trained in independent modes and are responsible for capturing different teacher behaviors, later used for specific components. In addition, we also aim at personalized explanations, as the framework can produce different results according to user preferences. Finally, extensive experiments demonstrate the effectiveness and fidelity of our proposed method.
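The fine-grained levels rely on knowledge distillation, whose standard loss is a KL divergence between temperature-softened teacher and student outputs. The sketch below shows that generic loss only; the paper's per-component student setup is not reproduced, and the logit values are arbitrary.

```python
import math

def soften(logits, T):
    """Temperature-softened softmax distribution over logits."""
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on softened distributions: zero when the student
    matches the teacher, positive otherwise."""
    p = soften(teacher_logits, T)
    q = soften(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

matched = distill_kl([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mismatched = distill_kl([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```

Training a student per component against this loss is what lets each student specialise in one slice of the teacher GNN's behavior.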